
    Multiclass CBCT image segmentation for orthodontics with deep learning

    Accurate segmentation of the jaw (i.e., mandible and maxilla) and the teeth in cone beam computed tomography (CBCT) scans is essential for orthodontic diagnosis and treatment planning. Although various (semi)automated methods have been proposed to segment the jaw or the teeth, there is still a lack of fully automated segmentation methods that can simultaneously segment both anatomic structures in CBCT scans (i.e., multiclass segmentation). In this study, we aimed to train and validate a mixed-scale dense (MS-D) convolutional neural network for multiclass segmentation of the jaw, the teeth, and the background in CBCT scans. Thirty CBCT scans were obtained from patients who had undergone orthodontic treatment. Gold standard segmentation labels were manually created by 4 dentists. As a benchmark, we also evaluated MS-D networks that segmented the jaw or the teeth (i.e., binary segmentation). All segmented CBCT scans were converted to virtual 3-dimensional (3D) models. The segmentation performance of all trained MS-D networks was assessed by the Dice similarity coefficient and surface deviation. The CBCT scans segmented by the MS-D network demonstrated a large overlap with the gold standard segmentations (Dice similarity coefficient: 0.934 ± 0.019, jaw; 0.945 ± 0.021, teeth). The MS-D network-based 3D models of the jaw and the teeth showed minor surface deviations when compared with the corresponding gold standard 3D models (0.390 ± 0.093 mm, jaw; 0.204 ± 0.061 mm, teeth). The MS-D network took approximately 25 s to segment 1 CBCT scan, whereas manual segmentation took about 5 h. This study showed that multiclass segmentation of the jaw and the teeth was accurate and that its performance was comparable to that of binary segmentation. The MS-D network trained for multiclass segmentation would therefore make patient-specific orthodontic treatment more feasible by strongly reducing the time required to segment multiple anatomic structures in CBCT scans.
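The Dice similarity coefficient reported above can be computed per class; the following is a minimal sketch with toy arrays (illustrative code, not the study's implementation):

```python
import numpy as np

def dice_per_class(pred, gt, labels=(1, 2)):
    """Per-class Dice similarity coefficient for a multiclass
    segmentation (e.g., label 1 = jaw, label 2 = teeth)."""
    scores = {}
    for c in labels:
        p = (pred == c)
        g = (gt == c)
        inter = np.logical_and(p, g).sum()
        denom = p.sum() + g.sum()
        scores[c] = 2.0 * inter / denom if denom else 1.0
    return scores

# toy 1D "scans": 0 = background, 1 = jaw, 2 = teeth
pred = np.array([0, 1, 1, 2, 2, 0])
gt   = np.array([0, 1, 2, 2, 2, 0])
print({c: round(s, 3) for c, s in dice_per_class(pred, gt).items()})  # -> {1: 0.667, 2: 0.8}
```

In the multiclass setting, the overlap is simply evaluated once per label, which is how the separate jaw and teeth figures above are reported.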

    DEEP LEARNING IN COMPUTER-ASSISTED MAXILLOFACIAL SURGERY

    Computer-assisted surgery (CAS) is a novel treatment modality that allows clinicians to create personalized treatment plans and virtually design implants, surgical guides, and radiotherapy boluses. However, even though CAS is a powerful means to optimize surgery, it currently requires a considerable amount of tedious and time-consuming manual work by experienced medical engineers to ensure the quality of the CAS workflow. This thesis therefore focuses on developing artificial intelligence algorithms, specifically convolutional neural networks (CNNs), to automate two main image processing tasks required in maxillofacial CAS: computed tomography (CT) image segmentation and CT artifact correction. Chapter 2 describes a CNN approach to segment bony structures in 20 different CT scans. All CT scans were acquired from patients who had previously undergone craniotomy, thus posing a significant challenge for the CNN to correctly recognize the pathological shape of the bones. Even though this segmentation task was validated on a challenging dataset, only two regions of interest (i.e., bone or background) were segmented. To demonstrate the ability of CNNs to take more regions into account, a CNN was developed in Chapter 3 to segment cone-beam computed tomography (CBCT) scans into the jaw, the teeth, and the background. To date, many different training strategies have been proposed in the literature to train CNNs on medical images. However, it remains unclear which strategy yields the best segmentation performance. To elucidate this topic, eight different CNN training strategies were evaluated and compared in Chapter 4. The CNNs described in this chapter were trained to segment anatomical structures in simulated and experimental CBCT scans. These experiments demonstrated that the best strategy is generally to train three separate CNNs and combine their segmentation results through majority voting.
In Chapter 5, an approach is described to deal with strong metal artifacts in CBCT scans during the image segmentation step of CAS. In particular, a mixed-scale dense convolutional neural network (MS-D network) was implemented to segment the bony structures of the mandible and the maxilla. The MS-D network described in this chapter clearly outperformed a widely used clinical segmentation method (i.e., snake evolution) and produced results comparable to alternative deep learning benchmarks. A novel symmetry-aware deep learning approach is proposed in Chapter 6 to reduce high cone-angle artifacts in CBCT images. In this approach, a CNN was trained using radial cone-beam CT slices to exploit the symmetry of high cone-angle artifacts. This allowed training a CNN to reduce the complex 3D cone-angle artifacts using only 2D slices as input. In this chapter, it is demonstrated that this symmetry-aware dimensionality reduction improves the performance and robustness of CNNs when reducing high cone-angle artifacts in CBCT scans. Deep learning, and specifically CNNs, have achieved remarkable success in various image processing tasks. The goal of Chapter 7 was to review all published studies in which neural network approaches were developed for CT image reconstruction, bone segmentation, and surgical planning. Although various neural network approaches were identified, the majority of studies (66%) applied CNNs. Interestingly, all of these CNNs were published from 2016 onwards, indicating the rapid paradigm shift this field has undergone. Nevertheless, much research is still required to make deep learning an integral part of the CAS workflow. In conclusion, this thesis contributes to enhancing the understanding of CNN approaches for medical image processing. Nevertheless, many interesting challenges and questions remain before CNNs can be incorporated as an integral part of the maxillofacial CAS routine.
I therefore hope that this thesis will inspire fellow researchers to take on these exciting challenges.
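The majority-voting strategy that Chapter 4 found to perform best can be sketched as follows (a minimal illustration; the orientation names and toy shapes are assumptions, not the thesis code):

```python
import numpy as np

def majority_vote(predictions):
    """Combine label maps from independently trained networks
    (e.g., axial, sagittal, and coronal 2D CNNs) by taking the
    most frequent label at every voxel."""
    stack = np.stack(predictions)          # shape: (n_networks, *volume)
    n_labels = int(stack.max()) + 1
    # count votes per label, then pick the label with the most votes
    votes = np.stack([(stack == c).sum(axis=0) for c in range(n_labels)])
    return votes.argmax(axis=0)

# three toy "network outputs" over a 4-voxel volume
axial    = np.array([0, 1, 2, 2])
sagittal = np.array([0, 1, 1, 2])
coronal  = np.array([1, 1, 2, 2])
print(majority_vote([axial, sagittal, coronal]))  # -> [0 1 2 2]
```

Each voxel label is decided by at least two of the three networks, which is what makes the combined result more robust than any single network's output.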

    Jomigi/Cone_angle_artifact_reduction: [1.0.0] - 2021.12.12

    Source code used to produce the results of: Minnema et al., Efficient high cone-angle artifact reduction in circular cone-beam CT using deep learning with geometry-aware dimension reduction (2021).

    Comparison of convolutional neural network training strategies for cone-beam CT image segmentation

    Background and objective: Over the past decade, convolutional neural networks (CNNs) have revolutionized the field of medical image segmentation. Prompted by the developments in computational resources and the availability of large datasets, a wide variety of two-dimensional (2D) and three-dimensional (3D) CNN training strategies have been proposed. However, a systematic comparison of the impact of these strategies on image segmentation performance is still lacking. Therefore, this study aimed to compare eight different CNN training strategies, namely 2D (axial, sagittal, and coronal slices), 2.5D (3 and 5 adjacent slices), majority voting, randomly oriented 2D cross-sections, and 3D patches. Methods: These eight strategies were used to train a U-Net and an MS-D network for the segmentation of simulated cone-beam computed tomography (CBCT) images comprising randomly placed non-overlapping cylinders and experimental CBCT images of anthropomorphic phantom heads. The resulting segmentation performances were quantitatively compared by calculating Dice similarity coefficients. In addition, all segmented and gold standard experimental CBCT images were converted into virtual 3D models and compared using orientation-based surface comparisons. Results: The CNN training strategy that generally resulted in the best performances on both simulated and experimental CBCT images was majority voting. When employing 2D training strategies, the segmentation performance can be optimized by training on image slices that are perpendicular to the predominant orientation of the anatomical structure of interest. Such spatial features should be taken into account when choosing or developing novel CNN training strategies for medical image segmentation. Conclusions: The results of this study will help clinicians and engineers to choose the most suited CNN training strategy for CBCT image segmentation.
    © 2021 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
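The 2D and 2.5D strategies compared in this study differ mainly in how input slices are drawn from the volume; a minimal sketch (the axis convention and shapes are assumptions, not the study's code):

```python
import numpy as np

def extract_slices(volume, orientation="axial", context=0):
    """Yield 2D (context=0 -> 1 slice) or 2.5D (context=1 -> 3 slices,
    context=2 -> 5 slices) training inputs from a 3D volume.
    Assumed axis convention: (axial, coronal, sagittal)."""
    axis = {"axial": 0, "coronal": 1, "sagittal": 2}[orientation]
    vol = np.moveaxis(volume, axis, 0)
    for i in range(context, vol.shape[0] - context):
        # stack the slice with its neighbours as input channels
        yield vol[i - context : i + context + 1]

vol = np.zeros((8, 16, 16))
samples = list(extract_slices(vol, "axial", context=1))
print(len(samples), samples[0].shape)  # -> 6 (3, 16, 16)
```

Training three such networks (one per orientation) and fusing their outputs is exactly the majority-voting setup that this study found to perform best.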

    Physiologically based pharmacokinetic modeling of intravenously administered nanoformulated substances

    The use of nanobiomaterials (NBMs) is becoming increasingly popular in the field of medicine. To improve the understanding of the biodistribution of NBMs, the present study aimed to implement and parametrize a physiologically based pharmacokinetic (PBPK) model. This model was used to describe the biodistribution of two NBMs after intravenous administration in rats, namely poly(alkyl cyanoacrylate) (PACA) loaded with cabazitaxel (PACA-Cbz) and LipImage™ 815. A Bayesian parameter estimation approach was applied to parametrize the PBPK model using the biodistribution data. Parametrization was performed for two distinct dose groups of PACA-Cbz. Furthermore, parametrizations were performed for three distinct dose groups of LipImage™ 815, resulting in a total of five different parametrizations. The results of this study indicate that the PBPK model can be adequately parametrized using biodistribution data. The PBPK parameters estimated for PACA-Cbz, specifically the vascular permeability, the partition coefficient, and the renal clearance rate, substantially differed from those of LipImage™ 815. This emphasizes the presence of kinetic differences between the different formulations and substances and the need to tailor the parametrization of PBPK models to the NBMs of interest. The kinetic parameters estimated in this study may help to establish a foundation for a more comprehensive database of NBM-specific kinetic information, which is a first, necessary step towards predictive biodistribution modeling. This effort should be supported by the development of robust in vitro methods to quantify kinetic parameters.
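At its core, a PBPK model is a set of mass-balance ODEs coupling organ compartments. The toy two-compartment sketch below illustrates the role of the three parameter types highlighted above (vascular exchange, partition coefficient, renal clearance); all parameter values are hypothetical, not the study's fitted estimates:

```python
def simulate_pbpk(dose, t_end=24.0, dt=0.01,
                  q=0.5, v_b=1.0, v_t=2.0, p=3.0, cl=0.1):
    """Forward-Euler simulation of a toy two-compartment PBPK model:
    blood <-> tissue exchange (flow q, partition coefficient p) with
    renal clearance cl from blood. All units are arbitrary."""
    c_b, c_t = dose / v_b, 0.0          # IV bolus into the blood compartment
    for _ in range(int(t_end / dt)):
        flux = q * (c_b - c_t / p)      # net blood-to-tissue flux
        dc_b = (-flux - cl * c_b) / v_b
        dc_t = flux / v_t
        c_b += dc_b * dt
        c_t += dc_t * dt
    return c_b, c_t

c_blood, c_tissue = simulate_pbpk(dose=1.0)
print(round(c_blood, 4), round(c_tissue, 4))
```

A full PBPK model adds one such mass-balance equation per organ, and Bayesian parameter estimation then fits q, p, and cl (per organ) to the observed biodistribution data.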

    CT image segmentation of bone for medical additive manufacturing using a convolutional neural network

    Background: The most tedious and time-consuming task in medical additive manufacturing (AM) is image segmentation. The aim of the present study was to develop and train a convolutional neural network (CNN) for bone segmentation in computed tomography (CT) scans. Methods: The CNN was trained with CT scans acquired using six different scanners. Standard tessellation language (STL) models of 20 patients who had previously undergone craniotomy and cranioplasty using additively manufactured skull implants served as "gold standard" models during CNN training. The CNN segmented all patient CT scans using a leave-2-out scheme. All segmented CT scans were converted into STL models and geometrically compared with the gold standard STL models. Results: The CT scans segmented using the CNN demonstrated a large overlap with the gold standard segmentation and resulted in a mean Dice similarity coefficient of 0.92 ± 0.04. The CNN-based STL models demonstrated mean surface deviations ranging between −0.19 mm ± 0.86 mm and 1.22 mm ± 1.75 mm when compared to the gold standard STL models. No major differences were observed between the mean deviations of the CNN-based STL models acquired using the six different CT scanners. Conclusions: The fully automated CNN was able to accurately segment the skull. CNNs thus offer the opportunity of removing the current prohibitive barriers of time and effort during CT image segmentation, making patient-specific AM constructs more accessible.
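The surface-deviation comparison between STL models can be illustrated as a nearest-point distance between two point-sampled surfaces (a brute-force sketch on toy points, not the study's geometric pipeline):

```python
import numpy as np

def mean_surface_deviation(points_a, points_b):
    """Mean distance from each point sampled on surface A to its
    nearest point sampled on surface B (one-sided, brute force)."""
    # pairwise distance matrix of shape (n_a, n_b) via broadcasting
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=2)
    return d.min(axis=1).mean()

# two toy "surfaces": B is A shifted by 0.2 mm along x
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
b = a + np.array([0.2, 0.0, 0.0])
print(round(mean_surface_deviation(a, b), 3))  # -> 0.2
```

Real STL comparisons sample many thousands of points per mesh and use a spatial index (e.g., a k-d tree) instead of the quadratic distance matrix shown here, and they track signed deviations to distinguish over- from under-segmentation.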

    A review on the application of deep learning for CT reconstruction, bone segmentation and surgical planning in oral and maxillofacial surgery

    Computer-assisted surgery (CAS) allows clinicians to personalize treatments and surgical interventions and has therefore become an increasingly popular treatment modality in maxillofacial surgery. The current maxillofacial CAS workflow consists of three main steps: (1) CT image reconstruction, (2) bone segmentation, and (3) surgical planning. However, each of these three steps can introduce errors that can heavily affect the treatment outcome. As a consequence, tedious and time-consuming manual post-processing is often necessary to ensure that each step is performed adequately. One way to overcome this issue is by developing and implementing neural networks (NNs) within the maxillofacial CAS workflow. These learning algorithms can be trained to perform specific tasks without the need for explicitly defined rules. In recent years, an extremely large number of novel NN approaches have been proposed for a wide variety of applications, which makes it difficult to keep up with all relevant developments. This study therefore aimed to summarize and review all relevant NN approaches applied to CT image reconstruction, bone segmentation, and surgical planning. After full-text screening, 76 publications were identified: 32 focusing on CT image reconstruction, 33 on bone segmentation, and 11 on surgical planning. Generally, convolutional NNs were most widely used in the identified studies, although the multilayer perceptron was most commonly applied in surgical planning tasks. Moreover, the drawbacks of current approaches and promising research avenues are discussed.